
    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
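    The core idea the survey describes can be shown with a minimal, self-contained sketch (illustrative only, not any specific method from the paper): estimating a rare tail probability P(X > a) for an Exp(1) lifetime by sampling from a more heavily-tailed proposal and reweighting each sample by the likelihood ratio.

```python
import math
import random

def importance_sampling_tail(a=20.0, n=100_000, seed=1):
    """Estimate the rare probability P(X > a) for X ~ Exp(1).

    Naive simulation would almost never observe the event (exp(-20) ~ 2e-9).
    Instead, sample from a proposal Exp(lam) with a much heavier tail and
    reweight each sample that exceeds a by the likelihood ratio f(x)/g(x).
    """
    lam = 1.0 / a                      # heuristic tilt: proposal mean == threshold
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)       # draw from the proposal density g
        if x > a:
            # likelihood ratio f(x)/g(x) = exp(-x) / (lam * exp(-lam*x))
            total += math.exp(-(1.0 - lam) * x) / lam
    return total / n

print(importance_sampling_tail())      # close to exp(-20) ~ 2.06e-9
```

    With this tilt the event occurs on roughly a third of the proposal draws, so run lengths shrink by many orders of magnitude relative to naive simulation, in line with the gains the abstract reports.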

    Simulating Tail Probabilities in GI/GI/1 Queues and Insurance Risk Processes with Subexponential Distributions

    This paper deals with estimating small tail probabilities of the steady-state waiting time in a GI/GI/1 queue with heavy-tailed (subexponential) service times. The problem of estimating infinite horizon ruin probabilities in insurance risk processes with heavy-tailed claims can be transformed into the same framework. It is well known that naive simulation is ineffective for estimating small probabilities, and special fast simulation techniques like importance sampling, multilevel splitting, etc., have to be used. Though there exists a vast amount of literature on the rare event simulation of queueing systems and networks with light-tailed distributions, previous fast simulation techniques for queues with subexponential service times have been confined to the M/GI/1 queue. The general approach is to use the Pollaczek-Khintchine transformation to convert the problem into that of estimating the tail distribution of a geometric sum of independent subexponential random variables. However, no such useful transformation exists when one goes from Poisson arrivals to general interarrival-time distributions. We describe and evaluate an approach that is based on directly simulating the random walk associated with the waiting-time process of the GI/GI/1 queue, using a change of measure called delayed subexponential twisting, an importance-sampling idea recently developed and found useful in the context of M/GI/1 heavy-tailed simulations.
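    The paper's delayed subexponential twisting is specific to the GI/GI/1 random walk, but the difficulty of heavy-tailed rare events can be illustrated with a different, simpler known technique: the Asmussen-Kroese conditional estimator for P(X1 + X2 > u) with Pareto summands. This sketch is illustrative only; all parameters and helper names are ours, not the paper's.

```python
import random

def pareto(rng, alpha=2.0):
    """Pareto sample with tail P(X > x) = x**(-alpha) for x >= 1."""
    return rng.random() ** (-1.0 / alpha)

def tail(x, alpha=2.0):
    """Pareto tail function, capped at 1 for x < 1."""
    return min(1.0, x ** (-alpha))

def ak_estimate(u=30.0, n=50_000, seed=2):
    """Asmussen-Kroese estimator of P(X1 + X2 > u) for i.i.d. Pareto terms.

    Conditions on the largest term coming last: the estimator is
    Z = 2 * P(X2 > max(X1, u - X1) | X1), which is unbiased by
    exchangeability and has far lower variance than the naive indicator
    when the tails are heavy.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1 = pareto(rng)
        total += 2.0 * tail(max(x1, u - x1))
    return total / n

def naive_estimate(u=30.0, n=1_000_000, seed=3):
    """Crude Monte Carlo baseline (feasible here only because u is moderate)."""
    rng = random.Random(seed)
    return sum(pareto(rng) + pareto(rng) > u for _ in range(n)) / n
```

    For a moderate threshold the two estimates agree, but as u grows the naive estimator needs ever more samples while the conditional estimator's relative error stays controlled.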

    Fast simulation of packet loss rates in a shared buffer communications switch

    This paper describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch, which has a buffer of finite capacity. Each stream is designated as being of either high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase, a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.
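    The first-phase idea, driving the queue from near-empty up to a high level, is classically done by swapping the drift of the queue-length process. A toy version of that change of measure for a single birth-death queue (not the paper's two-phase algorithm or its multi-stream traffic model) might look like:

```python
import random

def overflow_prob_is(p=0.3, b=15, n=50_000, seed=4):
    """Estimate P(queue hits level b before emptying), starting from level 1.

    The queue goes up with probability p < 1/2 and down with probability
    1 - p, so overflow is rare.  The change of measure swaps the two
    probabilities, making overflow likely; each step multiplies the path's
    likelihood ratio by (true step prob) / (biased step prob).
    """
    q = 1.0 - p
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        level, weight = 1, 1.0
        while 0 < level < b:
            if rng.random() < q:          # biased up-probability = q
                level += 1
                weight *= p / q           # true up-prob / biased up-prob
            else:                          # biased down-probability = p
                level -= 1
                weight *= q / p
        if level == b:
            total += weight
    return total / n

# exact gambler's-ruin value for comparison
p, b = 0.3, 15
r = (1 - p) / p
exact = (1 - r) / (1 - r ** b)
print(overflow_prob_is(), exact)
```

    Because the likelihood ratio is constant over every path that overflows, this particular change of measure is asymptotically optimal for the toy model, mirroring the optimality property claimed in the abstract.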

    Importance Sampling for the Simulation of Highly Reliable Markovian Systems

    In this paper we investigate importance sampling techniques for the simulation of Markovian systems with highly reliable components. The need for simulation arises because the state space of such systems is typically huge, making numerical computation inefficient. Naive simulation is inefficient due to the rarity of the system failure events. Failure biasing is a useful importance sampling technique for the simulation of such systems. However, until now, this technique has been largely heuristic. We present a mathematical framework for the study of failure biasing. Using this framework we derive variance reduction results which explain the orders of magnitude of variance reduction obtained in practice. We show that in many cases the magnitude of the variance reduction is such that the relative errors of the estimates remain bounded as the failure rates of components tend to zero. We also prove that the failure biasing heuristic in its original form may not give bounded relative error for a large class of systems and that a modification of the heuristic works for the general case. The theoretical results in this paper agree with experiments on the subject which have been reported in a previous paper.
    Keywords: stochastic simulation, importance sampling, failure biasing, Markov chains, reliability, availability, mean time to failure
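    As a concrete toy instance of failure biasing (an illustrative sketch, not the paper's framework): a birth-death chain on component-failure counts 0..3, where each further failure has small probability eps. The biased chain gives failures a fixed probability of 0.5 and carries the likelihood ratio along each path.

```python
import random

def failure_biasing(eps=0.001, top=3, bias=0.5, n=100_000, seed=5):
    """Estimate P(reach `top` failures before full repair), starting at 1 failure.

    True chain: from state k (0 < k < top) a further failure occurs with
    probability eps, a repair with probability 1 - eps.  Failure biasing
    simulates with failure probability `bias` instead and reweights each
    step by (true prob) / (biased prob).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        state, weight = 1, 1.0
        while 0 < state < top:
            if rng.random() < bias:        # biased failure step
                state += 1
                weight *= eps / bias
            else:                          # biased repair step
                state -= 1
                weight *= (1.0 - eps) / (1.0 - bias)
        if state == top:
            total += weight
    return total / n

# exact value by gambler's ruin with up-probability eps
eps, top = 0.001, 3
r = (1 - eps) / eps
exact = (1 - r) / (1 - r ** top)
print(failure_biasing(), exact)
```

    The target probability here is on the order of eps squared (about 1e-6), yet the biased chain hits the failure set on a constant fraction of runs, which is the mechanism behind the bounded-relative-error results the abstract describes.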

    Rare event simulation techniques for models of computer and communication systems

    This talk reviews some of the fast simulation techniques used for estimating probabilities of rare events and related quantities in stochastic models of computer and communication systems. It is by no means a complete survey of these rare event simulation techniques. However, an attempt will be made to give some of the basic concepts, intuitions, and algorithms used for different types of stochastic models. The reader is referred to Heidelberger (1995) and Shahabuddin (1995) for recent comprehensive surveys in this area, and the reference list of Boots and Shahabuddin (2000) for some of the later papers in this area. Estimation of the small probabilities of rare events is required in the design and operation of many engineering systems. Consider the case of a telecommunication network. It is customary to model such systems as networks of queues, with each queue having a buffer of finite capacity. Information packets that arrive to a queue when its buffer is full are lost. The rare event of interest may be the event of a packet being lost. Current regulations stipulate that the probability of packet loss should not exceed 10^-9. Or in a reliability model of a spacecraft computer, we may be interested in estimating the probability of the event that th

    Fast Simulation of Markov Chains with Small Transition Probabilities

    Consider a finite-state Markov chain where the transition probabilities differ by orders of magnitude. This Markov chain has an "attractor state," i.e., from any state of the Markov chain there exists a sample path of significant probability to the attractor state. There also exists a "rare set," which is accessible from the attractor state only by sample paths of very small probability. The problem is to estimate the probability that starting from the attractor state, the Markov chain hits the rare set before returning to the attractor state. Examples of this setting arise in the case of reliability models with highly reliable components as well as in the case of queueing networks with low traffic. Importance sampling is a commonly used simulation technique for the fast estimation of rare-event probabilities. It involves simulating the Markov chain under a new probability measure that emphasizes the most likely paths to the rare set. Previous research focused on developing importance-sampling schemes for a special case of Markov chains that did not include "high-probability cycles." We show through examples that the Markov chains used to model many commonly encountered systems do have high-probability cycles, and existing importance-sampling schemes can lead to infinite variance in simulating such systems. We then develop the insight that in the presence of high-probability cycles care should be taken in allocating the new transition probabilities so that the variance accumulated over these cycles does not increase without bounds. Based on this observation we develop two importance-sampling techniques that have the bounded relative error property, i.e., the simulation run-length required to estimate the rare-event probability to a fixed degree of accuracy remains bounded as the event of interest becomes more rare.
    Keywords: simulation, Markov chains, reliability models, steady-state measures, importance sampling
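    The danger the abstract identifies can be seen with a back-of-the-envelope computation (a toy of ours, not an example from the paper): if a cycle has true probability near 1 but the biased measure assigns it probability 0.5, each traversal multiplies the likelihood ratio by roughly 2, and the cycle's contribution to the estimator's second moment grows geometrically with the number of traversals.

```python
# Toy second-moment calculation for a high-probability cycle under naive biasing.
eps = 0.001
true_p = 1.0 - eps          # true probability of traversing the cycle once
biased_p = 0.5              # probability the biasing scheme assigns to it
lr = true_p / biased_p      # likelihood-ratio factor per traversal (~2)

# contribution of paths that traverse the cycle k times: (lr**k)**2 * biased_p**k
terms = [(lr ** (2 * k)) * (biased_p ** k) for k in range(1, 20)]
growth = terms[1] / terms[0]   # ratio between consecutive contributions
print(growth)                   # ~1.996 > 1, so the second-moment series diverges
```

    Since each extra traversal multiplies the second-moment contribution by lr**2 * biased_p, which is about 2 here, the variance accumulated over the cycle is unbounded, exactly the failure mode that motivates the paper's cycle-aware allocation of transition probabilities.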

    Importance Sampling and Stratification for Value-at-Risk

    This paper proposes and evaluates variance reduction techniques for efficient estimation of portfolio loss probabilities using Monte Carlo simulation. Precise estimation of loss probabilities is essential to calculating value-at-risk, which is simply a percentile of the loss distribution. The methods we develop build on delta-gamma approximations to changes in portfolio value. The simplest way to use such approximations for variance reduction employs them as control variates; we show, however, that far greater variance reduction is possible if the approximations are used as a basis for importance sampling, stratified sampling, or combinations of the two. This is especially true in estimating very small loss probabilities. Value-at-Risk (VAR) has become an important measure for estimating and managing portfolio risk [11, 13]. VAR is defined as a certain quantile of the change in a portfolio's value during a specified holding period. To be more specific, suppose the curre..
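    The paper's delta-gamma-based estimators are portfolio-specific, but the underlying importance-sampling idea for small loss probabilities can be sketched generically: for a normally distributed loss, shift the sampling mean to the loss threshold and reweight (a standard textbook sketch, not the authors' method).

```python
import math
import random

def shifted_tail_prob(c=4.5, n=200_000, seed=6):
    """Estimate P(L > c) for a standard normal loss L by mean shifting.

    Samples come from N(c, 1), so exceeding c happens about half the time;
    each exceeding sample is reweighted by the likelihood ratio
    phi(y) / phi(y - c) = exp(-c*y + c*c/2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(c, 1.0)
        if y > c:
            total += math.exp(-c * y + 0.5 * c * c)
    return total / n

exact = 0.5 * math.erfc(4.5 / math.sqrt(2.0))   # P(Z > 4.5), about 3.4e-6
print(shifted_tail_prob(), exact)
```

    In the paper's setting the shift direction is chosen from the delta-gamma approximation rather than fixed in advance, and stratifying on the approximation reduces variance further.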